
Reviews: On GANs and GMMs

Neural Information Processing Systems

Major comments: This work examines GANs by comparing them to a simple mixture of factor analyzers (MFA) using NDB, a score based on sample histograms. NDB counts the number of statistically different bins, where the bins are obtained from the Voronoi tessellation of k-means centroids. The key result is that the GMM/MFA captures the underlying distribution better than GANs do. When the MFA is combined with a pix2pix model, it generates sharp images comparable to those of the GAN. Overall, this is a well-written paper with interesting results that question the overall utility of GANs.
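The NDB procedure the review describes can be sketched briefly: fit k-means on real samples so its Voronoi cells define the bins, assign real and generated samples to bins, and count bins where the two proportions differ under a two-proportion z-test. This is a minimal illustrative sketch, not the authors' code; the function name `ndb_score`, the significance level, and the toy data are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.cluster import KMeans

def ndb_score(train, generated, k=10, alpha=0.05):
    """Count bins whose real/generated sample proportions differ significantly.

    Bins are the Voronoi cells of k-means centroids fit on `train`
    (a sketch of the NDB evaluation described in the paper).
    """
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(train)
    n, m = len(train), len(generated)
    p = np.bincount(km.labels_, minlength=k) / n            # real proportions
    q = np.bincount(km.predict(generated), minlength=k) / m  # generated proportions
    # Two-proportion z-test per bin, using the pooled proportion.
    pooled = (p * n + q * m) / (n + m)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n + 1 / m))
    z = np.abs(p - q) / np.maximum(se, 1e-12)
    return int(np.sum(z > norm.ppf(1 - alpha / 2)))
```

Samples drawn from the training distribution should yield a low NDB count, while a mode-collapsed or shifted model concentrates mass in few bins and scores high.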


On GANs and GMMs

Richardson, Eitan, Weiss, Yair

Neural Information Processing Systems

A longstanding problem in machine learning is to find unsupervised methods that can learn the statistical structure of high dimensional signals. In recent years, GANs have gained much attention as a possible solution to the problem, and in particular have shown the ability to generate remarkably realistic high resolution sampled images. At the same time, many authors have pointed out that GANs may fail to model the full distribution ("mode collapse") and that using the learned models for anything other than generating samples may be very difficult. In this paper, we examine the utility of GANs in learning statistical models of images by comparing them to perhaps the simplest statistical model, the Gaussian Mixture Model. First, we present a simple method to evaluate generative models based on relative proportions of samples that fall into predetermined bins.
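The baseline contrasted with GANs is the Gaussian Mixture Model (a mixture of factor analyzers in the paper's full method). As an illustrative sketch only, using scikit-learn's `GaussianMixture` with diagonal covariances on stand-in random data rather than the authors' MFA implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in data: 500 points in 8 dimensions (in the paper, image data).
X = np.random.default_rng(0).normal(size=(500, 8))

# Fit a mixture with diagonal component covariances; the paper's MFA instead
# uses low-rank-plus-diagonal covariances to scale to image dimensions.
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0).fit(X)

# Unlike a GAN, the fitted model supports both sampling and explicit
# likelihood evaluation.
samples, components = gmm.sample(200)
mean_log_likelihood = gmm.score(X)
```

The ability to evaluate likelihoods (not just draw samples) is what makes such a model usable "for anything other than generating samples," the difficulty the abstract raises for GANs.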